Unlock NextGen Product Search with ML and LLM Innovations

Hajer Bouafif and Praveen Mohan Prasad • Location: TUECHTIG • Haystack EU 2024

In the realm of search, Machine Learning plays a pivotal role in enhancing the user experience throughout the entire lifecycle, from ingesting documents to delivering highly relevant results for user queries. This session will showcase various ML integrations tailored to optimize outcomes for user queries in a retail scenario, accompanied by a live demonstration. We will explore cutting-edge techniques such as query understanding and rewriting using Large Language Models (LLMs), document enrichment, sparse, dense, and hybrid retrievers, as well as contextual re-ranking of results. Discover how to harness the power of LLM agents to dynamically select the most suitable retriever for each user query, with an LLM acting as a proxy evaluator that provides feedback on the results at every iteration. This approach aims to significantly improve overall retrieval quality without hurting search latency, using semantic caching to reduce the number of LLM calls. Finally, we look at embedding retrieval (vector search) and how introducing approximate vector search can degrade accuracy so much that significantly cheaper retrieval methods become preferable.
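The abstract describes a loop in which an LLM agent picks a retriever for each query, an LLM acts as a proxy evaluator grading the results on every iteration, and a semantic cache short-circuits repeat queries to keep latency down. The Python sketch below is one way such a loop could be wired together; every name in it (call_llm, the retriever stubs, the cache keyed on raw query strings) is a hypothetical placeholder for illustration, not the speakers' implementation or any specific product API.

```python
# Hypothetical sketch of an agentic retrieval loop with an LLM proxy evaluator
# and a semantic cache. Retriever bodies and the LLM call are placeholders.
from typing import Callable

RETRIEVERS: dict[str, Callable[[str], list[str]]] = {
    "sparse": lambda q: [],   # e.g. lexical/BM25 search over the product catalogue
    "dense":  lambda q: [],   # e.g. k-NN search over product embeddings
    "hybrid": lambda q: [],   # e.g. score-normalised blend of sparse and dense
}

# Keyed on the raw query here for simplicity; a real semantic cache would
# match on embedding similarity so that paraphrased queries also hit.
semantic_cache: dict[str, list[str]] = {}


def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted LLM endpoint."""
    raise NotImplementedError


def answer(query: str, max_rounds: int = 3) -> list[str]:
    # 1. Semantic cache: skip all LLM calls when a similar query was already served.
    if query in semantic_cache:
        return semantic_cache[query]

    # 2. Query understanding / rewriting with the LLM.
    rewritten = call_llm(f"Rewrite this retail search query for retrieval: {query}")

    results: list[str] = []
    for _ in range(max_rounds):
        # 3. The LLM agent selects the retriever it judges most suitable.
        choice = call_llm(
            f"Query: {rewritten}\nPick one retriever from {list(RETRIEVERS)} "
            "and reply with its name only."
        ).strip()
        results = RETRIEVERS.get(choice, RETRIEVERS["hybrid"])(rewritten)

        # 4. The LLM acts as a proxy evaluator, grading results on each iteration.
        verdict = call_llm(
            f"Query: {query}\nResults: {results}\nAnswer GOOD or BAD."
        ).strip().upper()
        if verdict == "GOOD":
            break
        # Feed the judgement back by asking for an improved rewrite.
        rewritten = call_llm(
            f"The results {results} were judged poor for '{query}'. "
            "Suggest a better search query."
        )

    semantic_cache[query] = results  # repeat queries now avoid further LLM calls
    return results
```

The cap on iterations and the cache are what keep the extra LLM round trips from dominating latency, which is the trade-off the abstract highlights.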

Hajer Bouafif

Amazon

Hajer Bouafif is a solutions architect in Data Analytics and Search with a background in Big Data engineering. Hajer provides organizations with best practices and well-architected reviews to build large-scale Machine Learning search solutions.

Praveen Mohan Prasad

Amazon

Praveen Mohan Prasad is a search specialist with data science expertise who actively researches and experiments with using Machine Learning to improve search relevance. Praveen advises clients on implementing and operationalising strategies to improve the search experience.